
    Uncertainty Quantification of an ORC turbine blade under a low quantile constraint

    Typical energy sources for ORC power systems, such as waste heat recovery or biomass, geothermal, and solar energy, feature variable heat loads and turbine-inlet thermodynamic conditions. In this context, advanced uncertainty quantification and robust optimization methodologies are now available and could be used during the ORC turbine design process to account for multiple uncertainties. This study presents a preliminary ANOVA and uncertainty quantification analysis, prior to applying a robust shape optimization approach to ORC turbine blades, in order to overcome the limitations of a deterministic optimization that neglects the effect of uncertainties in operating conditions or design variables. The analysis is performed by applying a two-dimensional inviscid computational fluid dynamics model to a typical supersonic turbine cascade for ORC applications. The working fluid is the siloxane MDM, which in the conditions of interest exhibits relevant non-ideal effects, modeled here using the Peng-Robinson-Stryjek-Vera equation of state.
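
    As a rough illustration of the non-intrusive setting described above, the sketch below propagates uncertain turbine-inlet conditions through a stand-in response by Monte Carlo sampling and evaluates a low quantile of the efficiency, echoing the low-quantile constraint of the title. The function `blade_efficiency`, the inlet bounds, and all numbers are hypothetical placeholders for the paper's 2D inviscid CFD evaluation of the MDM cascade.

```python
# Non-intrusive Monte Carlo propagation of turbine-inlet uncertainties.
# `blade_efficiency` is a hypothetical placeholder for the CFD evaluation.
import numpy as np

rng = np.random.default_rng(0)

def blade_efficiency(p_in, T_in):
    # Cheap analytic response standing in for the 2D inviscid cascade model.
    return 0.85 - 1e-3 * (p_in - 10.0) ** 2 - 5e-4 * (T_in - 545.0) ** 2

# Uncertain turbine-inlet conditions (illustrative uniform bounds):
p_in = rng.uniform(9.0, 11.0, 10_000)     # inlet pressure [bar]
T_in = rng.uniform(540.0, 550.0, 10_000)  # inlet temperature [K]

eta = blade_efficiency(p_in, T_in)
print(f"mean = {eta.mean():.4f}, std = {eta.std():.4f}")
# A low-quantile constraint bounds e.g. the 5% quantile of the efficiency:
print(f"5% quantile = {np.quantile(eta, 0.05):.4f}")
```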

    Influence of the heat transfer model on the estimation of mass transfer

    The efficient design and performance of turbopumps in rocket propulsion systems demand a robust numerical tool for predicting cavitation in cryogenic fluids. Building robust models of this complex physics from a limited set of experimental data is very challenging. Cryogenic fluids are thermo-sensitive: thermal effects and strong variations in fluid properties can alter cavitation behavior. This work illustrates how thermal effects can be estimated by considering both convective and conductive heat transfer. The Rayleigh-Plesset (RP) equation is coupled with a bubbly flow model to assess the prediction of thermal effects and is used to simulate reference experimental test cases from the literature. Moreover, tuning parameters that are not measured experimentally, such as the initial vapor-phase volume fraction α0, the initial bubble radius R0, and the specific coefficients of the heat transfer models, are treated as epistemic uncertainties in a probabilistic framework, which yields numerical error bars for the quantities of interest and thus permits a robust analysis of the thermal effects.
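
    A minimal sketch of the uncertainty-propagation step, assuming a simplified isothermal Rayleigh-Plesset model without the paper's bubbly-flow coupling or thermal terms: the initial bubble radius R0 is treated as an epistemic uncertainty, and Monte Carlo sampling yields an error bar on the final radius. All fluid properties, the imposed pressure drop, and the R0 range are illustrative.

```python
# Propagate uncertainty in the initial bubble radius R0 through a simplified
# (isothermal, incompressible) Rayleigh-Plesset model.
import numpy as np
from scipy.integrate import solve_ivp

rho, mu, S = 1000.0, 1e-3, 0.07   # liquid density, viscosity, surface tension
p_amb, p_v = 101325.0, 2339.0     # initial ambient and vapor pressure [Pa]
p_far = 2000.0                    # far-field pressure during the transient (assumed drop)

def rp_rhs(t, y, R0):
    R, Rdot = y
    p_g = (p_amb - p_v + 2 * S / R0) * (R0 / R) ** 3   # isothermal gas content
    p_B = p_v + p_g                                     # bubble pressure
    Rddot = ((p_B - p_far) / rho - 4 * mu * Rdot / (rho * R)
             - 2 * S / (rho * R) - 1.5 * Rdot ** 2) / R
    return [Rdot, Rddot]

rng = np.random.default_rng(1)
finals = []
for R0 in rng.uniform(40e-6, 60e-6, 200):   # assumed epistemic range for R0
    sol = solve_ivp(rp_rhs, (0.0, 1e-4), [R0, 0.0], args=(R0,),
                    rtol=1e-8, atol=1e-12)
    finals.append(sol.y[0, -1])

finals = np.array(finals)
print(f"final radius: mean {finals.mean():.3e} m, "
      f"error bar ± {2 * finals.std():.3e} m")
```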

    Quantifying uncertainties in a Venturi multiphase configuration

    The complexity of the physical structures in cavitating flows keeps their numerical simulation far from predictive and still a challenging issue. Understanding the role of physical and parametric uncertainties in cavitating flows is therefore of primary importance for obtaining reliable numerical solutions. In this paper, the impact of various sources of uncertainty on the prediction of cavitating flows is analyzed by coupling a non-intrusive stochastic method with a cavitating CFD solver. The proposed analysis is applied to a Venturi tube, for which experimental data on vapor formation are available in the literature. Numerical solutions with their associated error bars are compared to the experimental curves, revealing a strong sensitivity to the uncertainties in the inlet boundary conditions. This is confirmed by ranking the predominant uncertainties by means of an ANOVA analysis. Finally, a simple algorithm is proposed to provide an optimized set of parameters for the cavitation model, so that the resulting deterministic solution matches the most probable one obtained when the physical inlet uncertainties are considered.
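
    A hedged sketch of the ANOVA ranking step, using Sobol' indices from the SALib package as one possible non-intrusive implementation; `void_fraction_peak` is a hypothetical analytic stand-in for the cavitating CFD solver, and the uncertain inputs and bounds are illustrative.

```python
# Rank uncertain inputs by first-order and total Sobol' indices.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["u_inlet", "T_inlet", "model_coeff"],
    "bounds": [[10.0, 12.0], [290.0, 300.0], [0.8, 1.2]],
}

def void_fraction_peak(x):
    u, T, c = x.T
    # Cheap analytic surrogate standing in for the CFD response.
    return 0.3 * (u - 10.0) + 0.02 * (T - 295.0) ** 2 + 0.1 * c * (u - 10.0)

X = saltelli.sample(problem, 1024)   # (N*(2D+2), D) sample matrix
Y = void_fraction_peak(X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order {s1:.3f}, total {st:.3f}")
```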

    Accelerating hypersonic reentry simulations using deep learning-based hybridization (with guarantees)

    In this paper, we are interested in accelerating numerical simulations. We focus on a hypersonic planetary reentry problem whose simulation involves coupling fluid dynamics and chemical reactions. Simulating the chemical reactions takes most of the computational time but cannot be avoided if accurate predictions are required. We thus face a trade-off between cost-efficiency and accuracy: the simulation code has to be efficient enough for use in an operational context, yet accurate enough to predict the phenomenon faithfully. To address this trade-off, we design a hybrid simulation code coupling a traditional fluid dynamics solver with a neural network approximating the chemical reactions. We rely on the accuracy and dimension-reduction power of neural networks in a big-data context, and on the efficiency stemming from their matrix-vector structure, to achieve significant acceleration factors (×10 to ×18.6). This paper explains how we design such cost-effective hybrid simulation codes in practice. Above all, we describe methodologies to ensure accuracy guarantees, allowing us to go beyond traditional surrogate modeling and to use these codes as references. Comment: Under review.
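
    The sketch below illustrates one way such a hybrid code can be organized, with a guard that falls back to the exact chemistry solver whenever the state leaves the surrogate's validated domain; a domain guard is one simple way to enforce accuracy guarantees. The functions `exact_chemistry` and `nn_chemistry` and the domain bounds are invented for illustration, not taken from the paper.

```python
# Hybrid source-term evaluation: fast NN surrogate inside its trusted
# domain, exact (costly) chemistry solver elsewhere.
import numpy as np

LO, HI = np.array([1000.0, 0.0]), np.array([6000.0, 1.0])  # validated (T, Y) box

def exact_chemistry(state):
    # Stand-in for the expensive reference chemistry solver.
    T, Y = state
    return np.array([-1e3 * Y * np.exp(-8000.0 / T), -Y * np.exp(-8000.0 / T)])

def nn_chemistry(state):
    # Stand-in for a trained neural network; here a cheap approximation.
    T, Y = state
    return np.array([-1e3 * Y * (T / 8000.0) ** 2, -Y * (T / 8000.0) ** 2])

def hybrid_source_term(state):
    """Use the surrogate only where it is trusted; otherwise fall back."""
    if np.all(state >= LO) and np.all(state <= HI):
        return nn_chemistry(state)   # fast path: matrix-vector NN inference
    return exact_chemistry(state)    # guaranteed path: exact solver

print(hybrid_source_term(np.array([3000.0, 0.5])))   # inside domain -> NN
print(hybrid_source_term(np.array([8000.0, 0.5])))   # outside -> exact solver
```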

    Bayesian-based method with metamodels for rebuilding freestream conditions in atmospheric entry flows

    The paper investigates a new methodology to rebuild freestream conditions for the trajectory of a reentry vehicle from measurements of stagnation-point pressure and heat flux. Uncertainties due to measurements and model parameters are taken into account, and a Bayesian setting supplied with metamodels is used to solve the associated stochastic inverse problem.
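
    A minimal sketch of the Bayesian rebuilding idea: a random-walk Metropolis sampler infers freestream density and velocity from noisy stagnation-point pressure and heat-flux measurements. The closed-form "metamodel" below (Newtonian stagnation pressure plus Sutton-Graves heating) merely stands in for the paper's surrogate, and all values, priors, and noise levels are assumptions.

```python
# Random-walk Metropolis rebuilding of freestream (rho, V) from noisy
# stagnation-point measurements, via a cheap closed-form metamodel.
import numpy as np

rng = np.random.default_rng(2)
K_SG, R_NOSE = 1.74e-4, 0.5   # Sutton-Graves constant (SI), nose radius [m]

def metamodel(rho, V):
    p_stag = rho * V ** 2                           # Newtonian stagnation pressure
    q_stag = K_SG * np.sqrt(rho / R_NOSE) * V ** 3  # convective heat flux
    return np.array([p_stag, q_stag])

truth = metamodel(1e-3, 5000.0)
meas = truth * (1 + 0.05 * rng.standard_normal(2))  # 5% measurement noise
sigma = 0.05 * truth

def log_post(theta):
    rho, V = theta
    if not (1e-4 < rho < 1e-2 and 3000.0 < V < 7000.0):  # uniform prior bounds
        return -np.inf
    return -0.5 * np.sum(((metamodel(rho, V) - meas) / sigma) ** 2)

theta, samples = np.array([5e-3, 4000.0]), []
for _ in range(20_000):
    prop = theta + np.array([2e-4, 50.0]) * rng.standard_normal(2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
samples = np.array(samples[5000:])   # discard burn-in
print("posterior mean rho, V:", samples.mean(axis=0))
```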

    Some advances on anchored ANOVA expansion for high order moments computation

    Covariance decomposition of the output variance is used in this paper to account for interactions between non-orthogonal components in the anchored ANOVA method. Results show that this approach is less sensitive to the choice of the anchor reference point than the existing method. Covariance-based sensitivity indices (SI) are also used and compared with variance-based SI. Furthermore, we emphasize that the covariance decomposition can be generalized in a straightforward way to decompose higher-order moments.
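
    The toy example below demonstrates the covariance decomposition on an anchored (cut-HDMR) expansion of a 2D model: since anchored components are not mutually orthogonal, Cov(f_i, Y)/Var(Y) replaces Var(f_i)/Var(Y) as the sensitivity index, and the covariance-based indices still sum to one. The model, anchor point, and input distribution are arbitrary choices for illustration.

```python
# Covariance-based sensitivity indices from an anchored ANOVA expansion.
import numpy as np

rng = np.random.default_rng(3)
f = lambda x1, x2: x1 + 2 * x2 + x1 * x2   # toy model
c = (0.3, 0.7)                              # anchor point (arbitrary choice)

x1, x2 = rng.uniform(0, 1, 100_000), rng.uniform(0, 1, 100_000)
Y = f(x1, x2)

f0 = f(*c)
f1 = f(x1, c[1]) - f0                       # first-order anchored components
f2 = f(c[0], x2) - f0
f12 = f(x1, x2) - f1 - f2 - f0              # interaction component

varY = Y.var()
for name, comp in [("S1", f1), ("S2", f2), ("S12", f12)]:
    print(f"{name} = {np.cov(comp, Y)[0, 1] / varY:.4f}")
# The three covariance-based indices sum to 1 despite non-orthogonality.
```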

    A One-Time Truncate and Encode Multiresolution Stochastic Framework

    In this work, a novel adaptive strategy for stochastic problems, inspired by Harten's classical framework, is presented. The proposed algorithm allows building, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme, and it handles a large class of problems, from unsteady to discontinuous solutions. Its formulation recovers the results of the interpolation theory of the classical multiresolution approach, extended to uncertainty quantification problems. The interest of the present strategy is demonstrated on several numerical problems in which different forms of uncertainty distribution are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan-Orszag problem are reported in terms of accuracy and convergence. Finally, a two-degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model recovers some key physical aspects of the fluid/structure interaction thanks to the quasi-steady aerodynamic approximation employed. The injected uncertainty is chosen so as to obtain a complete parameterization of the mass matrix. All numerical results are compared with a classical Monte Carlo solution and with a non-intrusive Polynomial Chaos method.
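
    A toy sketch of the Harten-style multiresolution idea in the stochastic variable: details (interpolation errors from the coarser level) below a threshold are truncated, so resolution concentrates near discontinuities of the response. The step response, linear prediction operator, and threshold below are illustrative simplifications of the paper's far more general framework.

```python
# Multiresolution truncate-and-encode on a discontinuous stochastic response.
import numpy as np

def response(xi):
    # Toy response with a jump at xi = 0.4.
    return np.where(xi < 0.4, np.sin(2 * np.pi * xi), 2.0 + 0.1 * xi)

L, eps = 8, 1e-3
xi = np.linspace(0.0, 1.0, 2 ** L + 1)
u = response(xi)

kept = 0
for level in range(L, 0, -1):
    coarse = u[:: 2 ** (L - level + 1)]
    # Predict odd fine points by linear interpolation from the coarse level.
    pred = 0.5 * (coarse[:-1] + coarse[1:])
    fine_odd = u[2 ** (L - level) :: 2 ** (L - level + 1)]
    details = fine_odd - pred
    kept += np.count_nonzero(np.abs(details) > eps)   # encode only large details
print(f"details kept: {kept} of {2 ** L - 1} (adaptivity near the discontinuity)")
```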

    Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluids simulation

    The polynomial dimensional decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate structural link between the PDD and the analysis of variance (ANOVA) approach, PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. To address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aimed at building a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) truncation of the dimensionality of the ANOVA component functions, 2) an active-dimension technique, especially for second- and higher-order parameter interactions, and 3) a stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
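
    A toy sketch of the stepwise-regression idea: a surrogate is built by greedy forward selection of tensorized Legendre terms, and a term is retained only if it significantly reduces the least-squares residual. This stand-in ignores the PDD-specific structure, and all settings (dimension, degrees, stopping tolerance) are illustrative.

```python
# Greedy stepwise selection of Legendre basis terms for a sparse surrogate.
import itertools
import numpy as np
from numpy.polynomial.legendre import legval

rng = np.random.default_rng(4)
d, n, max_deg = 4, 400, 3

X = rng.uniform(-1, 1, (n, d))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 0] * X[:, 2]   # toy model

def basis_column(multi_index):
    col = np.ones(n)
    for j, deg in enumerate(multi_index):
        coeffs = np.zeros(deg + 1)
        coeffs[deg] = 1.0
        col *= legval(X[:, j], coeffs)   # univariate Legendre P_deg(x_j)
    return col

candidates = [m for m in itertools.product(range(max_deg + 1), repeat=d)
              if 0 < sum(m) <= max_deg]
selected, A = [], np.ones((n, 1))        # start from the constant term
resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
for _ in range(10):                       # stepwise forward selection
    scores = [abs(basis_column(m) @ resid) for m in candidates]
    best = int(np.argmax(scores))
    A_try = np.hstack([A, basis_column(candidates[best])[:, None]])
    r_try = y - A_try @ np.linalg.lstsq(A_try, y, rcond=None)[0]
    if resid @ resid - r_try @ r_try < 1e-8 * (y @ y):
        break                             # stop: no significant gain
    A, resid = A_try, r_try
    selected.append(candidates.pop(best))
print("retained multi-indices:", selected)
```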